Vision System in Digital Transformation
Published on: Thursday 01-07-2021
Gregory Hollows and Adarsha Sarpangala explain how machine vision inspection holds a key role in the digital transformation process.

Digital. Going digital. Digital transformation. These are the terms we have been hearing in the industry for the last few years. Manufacturing companies are automating wherever possible and are moving toward the smart factory concept in order to optimise efficiency, reduce wastage and downtime, and thereby improve total productivity. Machine vision inspection holds a key role in this digital transformation process, whether in manufacturing, logistics, or warehouse automation.
An enormous range of vision products is available today from an ever-growing base of manufacturers. For example, at Edmund Optics we carry more than 1,500 lens options to cover a wide range of applications, and even with this many options they still may not solve every need. The same applies to cameras and illumination. To add to the complexity, products from different manufacturers may need to be integrated to produce the best solution. As consumers of these products, engineers must come to the table prepared with the right information and questions to discern which products and suppliers are best suited to meet their application needs.
Application requirements
The application requirements are the specifications that are directly related to the parts under inspection. A system spec can be generated from this information. To write a good system spec, first put together a complete list of everything the system should be able to inspect, the type of data it should collect, and the accuracy requirements for that data. Next, and more importantly, list the reasons for each area of inspection, as well as how important each inspection is to the desired outcome.
After this information is assembled, engineers must take what may be the hardest step of the application – deciding what they want the system to do versus what it needs to do. Of course, everyone wants the system to do as much as possible, but in the long run adding what seem to be small features that are not really required for ultimate system success can lead to greatly increased cost. It can even jeopardise the accuracy achieved in the critical portions of the application. At this point, engineers should consult a vision system integrator. Integrators are good at identifying which operations will be costly to perform and which could be deal breakers.
Two other critical factors are where the system will be physically placed in the operation and to what extent it will communicate with the material handling equipment. More than anything else, these two issues can drive initial costs as well as cost overruns. They must be addressed early, since they will determine many of the system components.
Finally, consider the budget for the system. All areas of cost savings should be considered to get a good idea of how to justify the cost of the system. Many engineers might be shocked at the cost of a good, robust vision system. In most cases, though, even what appears to be a high price can quickly repay itself through higher throughput, improved reliability, fewer customer returns, increased customer satisfaction, less rework, lower downtime and minimal human interaction with the manufacturing process.
For systems that require high levels of measurement accuracy, long working distances, highly reflective parts, complicated part geometry or finishes, or different part sizes on the same line, engineers are well-advised to contact optics and lighting experts. With the preparatory work done, it’s time to start matching optics, lighting, camera and software to build the system’s backbone. For many applications, the camera and software together form the brain of the system, while the optics and lighting are the heart and soul. Both are equally important and must be matched correctly for the system to perform optimally.
Lighting

Lighting the object is usually the first thing that needs to be tackled. In many cases, this is the trickiest area. The more complicated the object’s size or geometry, the more types of materials in the object, the wider the range of material characteristics, and the more highly reflective the object is, the more difficult it will be for the system to create repeatable, even-contrast images. In many cases, simple objects can be illuminated by basic, cost-effective, directional lighting. Conversely, complicated objects require multiple lights with more cumbersome mounting. Such setups are costlier and more sensitive to misalignment.
Illumination sources come with a wide range of options. They have different colour characteristics, lifetimes and functionalities, variations with temperature and environment, and varying levels of ruggedness. All these issues should be considered when building a system, and they must be evaluated against the factory environment where the system will run, not the lab where it will initially be built.
One final warning on illumination: changes to the materials used to make the parts can wreak havoc on a vision system’s ability to perform after the changeover. Most often, this is related to the illumination used. If the process uses various materials, make the integrator aware of this fact up front. Additionally, if the engineer plans to change the manufacturing process after a system is deployed, verify with the integrator whether the system will still work or needs an update before running the new parts. If not, expect a high rejection rate from the vision system as soon as the switch is made.
Cameras
A variety of camera resolutions, sensor sizes and features are available nowadays. Not having enough pixels on a given feature to yield accurate and repeatable measurements will impact your results. Understanding where algorithms maximise their capabilities will go a long way toward yielding the desired results, not to mention allowing the engineer to directly calculate how many pixels the imager needs.
For example, consider an object with a group of dark circles on a light background. It’s easy to count how many circles are in a given area even if the imager only has an area of 2 x 2 or 3 x 3 pixels on each circle. Each one will appear to be a dark spot on a bright background and will be easily analysed by the software. Now extend the requirement a bit. What if you wanted to measure the roundness of each circle? Even with the most powerful blob analysis, edge detection and subpixel algorithms, highly accurate and repeatable results will be harder to achieve.
By simply stepping up a level in camera resolution, engineers can increase the number of pixels being analysed and thus greatly leverage those powerful algorithms. Here, integrators with the help of the camera or software provider, can offer an optimal camera to produce the best results.
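The reasoning above can be sketched as a quick back-of-the-envelope calculation. This is a hypothetical illustration, not a vendor formula: the function name and the pixels-per-feature figures (roughly 3 pixels across a feature for simple counting, around 10 for measurement) are assumptions chosen for the example, and real requirements should be confirmed with the camera or software provider.

```python
import math

def required_pixels(fov_mm: float, feature_mm: float,
                    pixels_per_feature: int) -> int:
    """Pixels needed along one axis of the sensor so that each feature
    of the given size spans the target number of pixels."""
    return math.ceil(fov_mm / feature_mm * pixels_per_feature)

# Example: a 100 mm field of view containing 0.5 mm circles.
detect = required_pixels(100, 0.5, 3)    # enough to count the circles
measure = required_pixels(100, 0.5, 10)  # enough to gauge roundness

print(detect, measure)  # 600 2000
```

Stepping from counting to measuring in this example more than triples the required resolution along each axis, which is exactly why the roundness task pushes engineers up a camera class.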
Lenses
The last part of the system that needs to be determined is the lens. While every part of the system is critical, choosing the wrong optics can render all other efforts in vain. Basic parameters like field of view, working distance, sensor size support, and camera mount type are the essential details needed for initial lens selection. In some applications, additional parameters like depth of field, telecentricity, contrast and distortion must also be considered.
There are also choices such as fixed focal length, telecentric, variable magnification and zoom lenses. One of the most critical things to remember here is that even if two lenses appear to have the same specifications, they may not be equivalent products. For example, you may end up with three different lenses of 35mm focal length with the same mounting types, same angular FOV, and same F-stop settings, but their resolving power may differ.
Returning to our dots example, all three lenses would probably be suitable for counting the dots. But to measure the dots for roundness, you may require a lens with better resolving capability. So it is advisable to consult an optics expert to recommend the best match.
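The basic lens parameters listed above can be tied together with a first-order estimate. The sketch below uses the pinhole approximation, which holds when the working distance is much larger than the focal length; the function name and the example numbers (a 2/3-inch sensor of 8.8 mm width at a 200 mm working distance) are assumptions for illustration, and final lens selection should be confirmed against the manufacturer's specifications.

```python
def approx_focal_length(working_distance_mm: float,
                        sensor_size_mm: float,
                        fov_mm: float) -> float:
    """First-order focal length estimate: f ~ WD * sensor / FOV.
    Sensor size and FOV must be measured along the same axis
    (e.g. both horizontal)."""
    return working_distance_mm * sensor_size_mm / fov_mm

# Example: 200 mm working distance, 8.8 mm wide sensor, 50 mm FOV.
f = approx_focal_length(200, 8.8, 50)
print(round(f, 1))  # 35.2 -> close to a stock 35mm lens
```

An estimate like this only narrows the search to a focal length class such as the 35mm lenses mentioned above; it says nothing about resolving power, distortion or telecentricity, which is where the optics expert comes in.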
Deployment
With the right camera, lens and lighting in hand, it’s time to deploy and test the system on the shop floor. Bear in mind that a system that works fine in the lab may not work as well on the floor. Engineers may need to build enclosures to protect the system against environmental hazards or to eliminate unwanted ambient light. Getting a vision system up and running will cost money, and each part of the system has an equal load to bear. Be sure that each gets the time it deserves and that long-term support is sorted out both internally and externally to maximise RoI.

Gregory Hollows is the Vice President of the Imaging Business Unit in Edmund Optics’ Barrington, NJ, USA office. He is responsible for everything pertaining to vision and imaging for Edmund Optics, including the business plan, strategy, and product marketing. He enjoys having the opportunity to affect what imaging products Edmund Optics puts forth, while helping customers solve problems.

Adarsha Sarpangala is Imaging Business Key Account Manager in Edmund Optics India Pvt Ltd’s Bengaluru office. He takes care of Edmund Optics’ machine vision business for the Indian market. Adarsha has around 10 years of experience in the machine vision industry, including the selection of cameras, lenses, lighting and machine vision solutions. For any queries on the articles or machine vision applications he can be contacted on [email protected]